The article discusses the urgent need for global cooperation to ensure the safety of artificial intelligence (AI) as systems become increasingly powerful and potentially dangerous. Drawing a parallel to the Pugwash Conferences that addressed nuclear weapons during the Cold War, the piece highlights a recent initiative, the International Dialogues on AI Safety, which brings together leading AI scientists from both China and the West. The initiative aims to foster dialogue and build consensus on AI safety as a global public good.

The article emphasizes that rapid advances in AI capabilities pose existential risks, including the potential loss of human control and the malicious use of AI systems. To address these risks, the scientists involved in the dialogues have proposed three main recommendations:

1. **Emergency Preparedness Agreements and Institutions**: Establish an international body to facilitate collaboration among national AI safety authorities. This body would help states agree on the technical and institutional measures needed to prepare for advanced AI systems, ensuring that a minimal set of effective safety preparedness measures is adopted globally.

2. **Safety Assurance Framework**: Require developers of frontier AI to demonstrate that their systems do not cross defined red lines, such as the capability for autonomous replication or for aiding the creation of weapons of mass destruction. The framework would mandate rigorous testing and evaluation before release, as well as post-deployment monitoring to ensure ongoing safety.

3. **Independent Global AI Safety and Verification Research**: Create Global AI Safety and Verification Funds to support independent research into AI safety. This research would focus on developing verification methods that enable states to assess compliance with safety standards and frameworks.

The piece concludes by underscoring the importance of a collective effort among scientists, states, and other stakeholders to navigate the challenges posed by AI. It stresses that the ethical responsibility of scientists, who understand the technology's implications, is vital to correcting the current imbalance in AI development, which is heavily shaped by profit motives and national security concerns. The article advocates a proactive approach that ensures AI serves humanity's best interests while mitigating its risks.